Results 1 - 19 of 19
1.
Nat Commun ; 15(1): 2026, 2024 Mar 11.
Article in English | MEDLINE | ID: mdl-38467600

ABSTRACT

Timely detection of Barrett's esophagus, the pre-malignant condition of esophageal adenocarcinoma, can improve patient survival rates. The Cytosponge-TFF3 test, a non-endoscopic, minimally invasive procedure, has been used for diagnosing intestinal metaplasia in Barrett's. However, it depends on a pathologist's assessment of two slides stained with H&E and the immunohistochemical biomarker TFF3. This resource-intensive clinical workflow limits large-scale screening in the at-risk population. To improve screening capacity, we propose a deep learning approach for detecting Barrett's from routinely stained H&E slides. The approach relies solely on diagnostic labels, eliminating the need for expensive localized expert annotations. We train and independently validate our approach on two clinical trial datasets totaling 1866 patients. We achieve AUROCs of 91.4% and 87.3% on the discovery and external test datasets for the H&E model, comparable to the TFF3 model. Our proposed semi-automated clinical workflow can reduce pathologists' workload to 48% without sacrificing diagnostic performance, enabling pathologists to prioritize high-risk cases.
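The triage step of such a semi-automated workflow can be sketched in a few lines. This is an illustrative sketch only: the `triage` function, its thresholds, and the scores below are hypothetical, not the operating points calibrated in the study.

```python
def triage(prob_positive, low=0.2, high=0.8):
    """Route a case by the model's predicted probability of Barrett's.

    Confident negatives are auto-resolved; everything else goes to a
    pathologist, with confident positives prioritised. The thresholds
    are illustrative, not the study's calibrated operating points.
    """
    if prob_positive <= low:
        return "auto-negative"     # confidently negative: no manual review
    if prob_positive >= high:
        return "priority-review"   # likely Barrett's: review first
    return "review"                # uncertain: defer to the pathologist

# Hypothetical model scores for eight cases
scores = [0.05, 0.10, 0.95, 0.50, 0.02, 0.85, 0.15, 0.60]
manual = [s for s in scores if triage(s) != "auto-negative"]
workload = len(manual) / len(scores)  # fraction still reviewed manually
```

In this toy queue the model clears half of the cases automatically; the abstract reports a comparable reduction (to 48% of the original workload) at matched diagnostic performance.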


Subjects
Adenocarcinoma , Barrett Esophagus , Deep Learning , Esophageal Neoplasms , Humans , Barrett Esophagus/diagnosis , Barrett Esophagus/pathology , Esophageal Neoplasms/diagnosis , Esophageal Neoplasms/pathology , Adenocarcinoma/diagnosis , Adenocarcinoma/pathology , Metaplasia
2.
Nat Commun ; 13(1): 1161, 2022 03 04.
Article in English | MEDLINE | ID: mdl-35246539

ABSTRACT

Imperfections in data annotation, known as label noise, are detrimental to the training of machine learning models and have a confounding effect on the assessment of model performance. Nevertheless, employing experts to remove label noise by fully re-annotating large datasets is infeasible in resource-constrained settings, such as healthcare. This work advocates a data-driven approach to prioritising samples for re-annotation, which we term "active label cleaning". We propose to rank instances according to the estimated label correctness and labelling difficulty of each sample, and introduce a simulation framework to evaluate relabelling efficacy. Our experiments on natural images and on a specifically devised medical imaging benchmark show that cleaning noisy labels mitigates their negative impact on model training, evaluation, and selection. Crucially, the proposed approach enables correcting labels up to 4× more effectively than typical random selection in realistic conditions, making better use of experts' valuable time to improve dataset quality.
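The ranking idea can be sketched as follows. The scoring rule (sum of the model's confidence in the currently assigned label and an estimated labelling difficulty, lowest first) is a simplified stand-in for the paper's criterion, and all names and numbers are hypothetical.

```python
def rank_for_relabelling(samples):
    """Rank samples so likely-mislabelled, easy-to-fix cases come first.

    Each sample is (id, p_assigned_label, difficulty): p_assigned_label
    is the model's probability for the current label, and difficulty in
    [0, 1] estimates relabelling effort. The additive score is a
    simplified stand-in for the paper's ranking criterion.
    """
    # Low probability for the current label suggests noise; low
    # difficulty means an expert can correct it cheaply.
    return sorted(samples, key=lambda s: s[1] + s[2])

samples = [("a", 0.9, 0.1), ("b", 0.2, 0.3), ("c", 0.1, 0.9), ("d", 0.15, 0.2)]
priority = [s[0] for s in rank_for_relabelling(samples)]
```

With a fixed relabelling budget, an expert would work down this priority list instead of picking samples at random.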


Subjects
Diagnostic Imaging , Machine Learning , Benchmarking , Data Curation , Delivery of Health Care
3.
JAMA Netw Open ; 3(11): e2027426, 2020 11 02.
Article in English | MEDLINE | ID: mdl-33252691

ABSTRACT

Importance: Personalized radiotherapy planning depends on high-quality delineation of target tumors and surrounding organs at risk (OARs). This process puts additional time burdens on oncologists and introduces variability among both experts and institutions. Objective: To explore clinically acceptable autocontouring solutions that can be integrated into existing workflows and used in different domains of radiotherapy. Design, Setting, and Participants: This quality improvement study used a multicenter imaging data set comprising 519 pelvic and 242 head and neck computed tomography (CT) scans from 8 distinct clinical sites, from patients diagnosed with either prostate or head and neck cancer. The scans were acquired as part of treatment dose planning from patients who received intensity-modulated radiation therapy between October 2013 and February 2020. Fifteen different OARs were manually annotated by expert readers and radiation oncologists. The models were trained on a subset of the data set to automatically delineate OARs and evaluated on both internal and external data sets. Data analysis was conducted October 2019 to September 2020. Main Outcomes and Measures: The autocontouring solution was evaluated on external data sets, and its accuracy was quantified with volumetric agreement and surface distance measures. Models were benchmarked against expert annotations in an interobserver variability (IOV) study. Clinical utility was evaluated by measuring time spent on manual corrections and annotations from scratch. Results: A total of 519 participants' (519 [100%] men; 390 [75%] aged 62-75 years) pelvic CT images and 242 participants' (184 [76%] men; 194 [80%] aged 50-73 years) head and neck CT images were included.
The models achieved levels of clinical accuracy within the bounds of expert IOV for 13 of 15 structures (eg, left femur, κ = 0.982; brainstem, κ = 0.806) and performed consistently well across both external and internal data sets (eg, mean [SD] Dice score for left femur, internal vs external data sets: 98.52% [0.50] vs 98.04% [1.02]; P = .04). The correction time of autogenerated contours on 10 head and neck and 10 prostate scans was measured as a mean of 4.98 (95% CI, 4.44-5.52) min/scan and 3.40 (95% CI, 1.60-5.20) min/scan, respectively, to ensure clinically acceptable accuracy. Manual segmentation of the head and neck took a mean 86.75 (95% CI, 75.21-92.29) min/scan for an expert reader and 73.25 (95% CI, 68.68-77.82) min/scan for a radiation oncologist. The autogenerated contours represented a 93% reduction in time. Conclusions and Relevance: In this study, the models achieved levels of clinical accuracy within expert IOV while reducing manual contouring time and performing consistently well across previously unseen heterogeneous data sets. With the availability of open-source libraries and reliable performance, this creates significant opportunities for the transformation of radiation treatment planning.


Subjects
Deep Learning/statistics & numerical data , Head and Neck Neoplasms/radiotherapy , Prostatic Neoplasms/radiotherapy , Radiotherapy, Image-Guided/instrumentation , Aged , Head and Neck Neoplasms/diagnostic imaging , Humans , Male , Middle Aged , Neural Networks, Computer , Observer Variation , Organs at Risk/radiation effects , Prostatic Neoplasms/diagnostic imaging , Quality Improvement/standards , Radiotherapy, Image-Guided/methods , Radiotherapy, Intensity-Modulated/methods , Reproducibility of Results , Tomography, X-Ray Computed/methods
4.
Sci Rep ; 10(1): 2408, 2020 02 12.
Article in English | MEDLINE | ID: mdl-32051456

ABSTRACT

In large population studies such as the UK Biobank (UKBB), quality control of the acquired images by visual assessment is infeasible. In this paper, we apply a recently developed, fully automated quality control pipeline for cardiac MR (CMR) images to the first 19,265 short-axis (SA) cine stacks from the UKBB. We present the results for the three estimated quality metrics (heart coverage, inter-slice motion, and image contrast in the cardiac region), as well as their potential associations with factors including acquisition details and subject-related phenotypes. Up to 14.2% of the analysed SA stacks had sub-optimal coverage (i.e. missing basal and/or apical slices); however, most of these were limited to the first year of acquisition. Up to 16% of the stacks were affected by noticeable inter-slice motion (i.e. average inter-slice misalignment greater than 3.4 mm). Inter-slice motion was positively correlated with weight and body surface area. Only 2.1% of the stacks had an average end-diastolic cardiac image contrast below 30% of the dynamic range. These findings will be highly valuable both for the scientists involved in UKBB CMR acquisition and for those who use the dataset for research purposes.
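A minimal sketch of the three stack-level checks, assuming the coverage flag, per-slice misalignments, and contrast fraction are produced by an upstream landmark/segmentation step; only the 3.4 mm motion and 30% contrast thresholds come from the abstract, the rest is hypothetical.

```python
def qc_flags(coverage_ok, slice_offsets_mm, contrast_fraction):
    """Apply the three stack-level quality checks described above.

    coverage_ok: whether basal and apical slices are present.
    slice_offsets_mm: per-slice in-plane misalignments in mm.
    contrast_fraction: cardiac image contrast as a fraction of the
    dynamic range. Returns True for each check that passes.
    """
    mean_motion = sum(slice_offsets_mm) / len(slice_offsets_mm)
    return {
        "coverage": coverage_ok,
        "motion": mean_motion <= 3.4,     # threshold from the abstract
        "contrast": contrast_fraction >= 0.30,  # 30% of dynamic range
    }

# A stack with full coverage and good contrast but noticeable motion
flags = qc_flags(True, [1.0, 2.0, 9.0], 0.45)
```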


Subjects
Cardiac Imaging Techniques , Heart/diagnostic imaging , Aged , Biological Specimen Banks , Cardiac Imaging Techniques/methods , Female , Humans , Image Processing, Computer-Assisted/methods , Male , Middle Aged , Quality Control , United Kingdom
5.
IEEE Trans Med Imaging ; 39(6): 2088-2099, 2020 06.
Article in English | MEDLINE | ID: mdl-31944949

ABSTRACT

Quantification of anatomical shape changes currently relies on scalar global indexes, which are largely insensitive to regional or asymmetric modifications. Accurate assessment of pathology-driven anatomical remodeling is a crucial step for the diagnosis and treatment of many conditions. Deep learning approaches have recently achieved wide success in the analysis of medical images, but they lack interpretability in the feature extraction and decision processes. In this work, we propose a new interpretable deep learning model for shape analysis. In particular, we exploit deep generative networks to model a population of anatomical segmentations through a hierarchy of conditional latent variables. At the highest level of this hierarchy, a two-dimensional latent space is simultaneously optimised to discriminate distinct clinical conditions, enabling the direct visualisation of the classification space. Moreover, the anatomical variability encoded by this discriminative latent space can be visualised in the segmentation space thanks to the generative properties of the model, making the classification task transparent. This approach yielded high accuracy in the categorisation of healthy and remodelled left ventricles when tested on unseen segmentations from our own multi-centre dataset as well as in an external validation set, and on hippocampi from healthy controls and patients with Alzheimer's disease when tested on ADNI data. More importantly, it enabled the visualisation in three dimensions of both global and regional anatomical features that better discriminate between the conditions under examination. The proposed approach scales effectively to large populations, facilitating high-throughput analysis of normal anatomy and pathology in large-scale studies of volumetric imaging.


Subjects
Alzheimer Disease , Magnetic Resonance Imaging , Alzheimer Disease/diagnostic imaging , Hippocampus , Humans
6.
IEEE Trans Med Imaging ; 38(12): 2755-2767, 2019 12.
Article in English | MEDLINE | ID: mdl-31021795

ABSTRACT

Detecting acoustic shadows in ultrasound images is important in many clinical and engineering applications. Real-time feedback of acoustic shadows can guide sonographers to a standardized diagnostic viewing plane with minimal artifacts and can provide additional information for other automatic image analysis algorithms. However, automatically detecting shadow regions using learning-based algorithms is challenging because pixel-wise ground truth annotation of acoustic shadows is subjective and time consuming. In this paper, we propose a weakly supervised method for automatic confidence estimation of acoustic shadow regions. Our method is able to generate a dense shadow-focused confidence map. In our method, a shadow-seg module is built to learn general shadow features for shadow segmentation, based on global image-level annotations as well as a small number of coarse pixel-wise shadow annotations. A transfer function is introduced to extend the obtained binary shadow segmentation to a reference confidence map. In addition, a confidence estimation network is proposed to learn the mapping between input images and the reference confidence maps. This network is able to predict shadow confidence maps directly from input images during inference. We use evaluation metrics such as Dice and inter-class correlation to verify the effectiveness of our method. Our method is more consistent than human annotation and outperforms the state-of-the-art quantitatively in shadow segmentation and qualitatively in confidence estimation of shadow regions. Furthermore, we demonstrate the applicability of our method by integrating shadow confidence maps into tasks such as ultrasound image classification, multi-view image fusion, and automated biometric measurements.


Subjects
Image Processing, Computer-Assisted/methods , Supervised Machine Learning , Ultrasonography, Prenatal/methods , Algorithms , Deep Learning , Female , Fetus/diagnostic imaging , Humans , Pregnancy
7.
J Cardiovasc Magn Reson ; 21(1): 18, 2019 03 14.
Article in English | MEDLINE | ID: mdl-30866968

ABSTRACT

BACKGROUND: The trend towards large-scale studies including population imaging poses new challenges in terms of quality control (QC). This is a particular issue when automatic processing tools such as image segmentation methods are employed to derive quantitative measures or biomarkers for further analyses. Manual inspection and visual QC of each segmentation result is not feasible at large scale. However, it is important to be able to automatically detect when a segmentation method fails in order to avoid inclusion of wrong measurements into subsequent analyses, which could otherwise lead to incorrect conclusions. METHODS: To overcome this challenge, we explore an approach for predicting segmentation quality based on Reverse Classification Accuracy, which enables us to discriminate between successful and failed segmentations on a per-case basis. We validate this approach on a new, large-scale, manually annotated set of 4800 cardiovascular magnetic resonance (CMR) scans. We then apply our method to a large cohort of 7250 CMR scans on which we have performed manual QC. RESULTS: We report results for predicting segmentation quality metrics including the Dice Similarity Coefficient (DSC) and surface-distance measures. As initial validation, we present data for 400 scans demonstrating 99% accuracy for classifying low- and high-quality segmentations using the predicted DSC scores. As further validation, we show high correlation between real and predicted scores and 95% classification accuracy on 4800 scans for which manual segmentations were available. We mimic real-world application of the method on 7250 CMR scans, where we show good agreement between predicted quality metrics and manual visual QC scores. CONCLUSIONS: We show that Reverse Classification Accuracy has the potential for accurate and fully automatic segmentation QC on a per-case basis in the context of large-scale population imaging, as in the UK Biobank Imaging Study.
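Once a quality score such as a predicted DSC is available, the per-case QC decision reduces to a threshold. The 0.7 cut-off and the scan names below are hypothetical, not the study's operating point.

```python
def classify_quality(predicted_dsc, threshold=0.7):
    """Binary QC decision from a predicted Dice Similarity Coefficient.

    Scans whose predicted DSC falls below the cut-off are flagged for
    manual review or exclusion. The 0.7 threshold is illustrative.
    """
    return "pass" if predicted_dsc >= threshold else "fail"

# Hypothetical predicted DSC scores from the quality-prediction step
predictions = {"scan01": 0.93, "scan02": 0.41, "scan03": 0.88}
failed = sorted(k for k, v in predictions.items()
                if classify_quality(v) == "fail")
```

In a cohort-scale pipeline, only the `failed` list would need human attention, mirroring the per-case triage the abstract describes.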


Subjects
Heart/diagnostic imaging , Image Interpretation, Computer-Assisted/standards , Magnetic Resonance Imaging/standards , Automation , Humans , Predictive Value of Tests , Quality Control , Reproducibility of Results , United Kingdom
8.
Med Image Anal ; 54: 1-9, 2019 05.
Article in English | MEDLINE | ID: mdl-30807894

ABSTRACT

Deep networks have set the state-of-the-art in most image analysis tasks by replacing handcrafted features with learned convolution filters within end-to-end trainable architectures. Still, the specifications of a convolutional network are subject to much manual design - the shape and size of the receptive field for convolutional operations is a very sensitive part that has to be tuned for different image analysis applications. 3D fully-convolutional multi-scale architectures with skip-connections that excel at semantic segmentation and landmark localisation have huge memory requirements and rely on large annotated datasets - an important limitation for wider adoption in medical image analysis. We propose a novel and effective method based on trainable 3D convolution kernels that learns both filter coefficients and spatial filter offsets in a continuous space based on the principle of differentiable image interpolation first introduced for spatial transformer networks. A deep network that incorporates this one binary extremely large and inflecting sparse kernel (OBELISK) filter requires fewer trainable parameters and less memory while achieving high-quality results compared to fully-convolutional U-Net architectures on two challenging 3D CT multi-organ segmentation tasks. Extensive validation experiments indicate that the performance of sparse deformable convolutions is due to their ability to capture large spatial context with few expressive filter parameters and that network depth is not always necessary to learn complex shape and appearance features. A combination with conventional CNNs further improves the delineation of small organs with large shape variations, and the fast inference time using flexible image sampling may offer new potential use cases for deep networks in computer-assisted, image-guided interventions.


Subjects
Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional , Neural Networks, Computer , Tomography, X-Ray Computed , Abdomen/diagnostic imaging , Algorithms , Humans , Viscera/diagnostic imaging
9.
Med Image Anal ; 53: 197-207, 2019 04.
Article in English | MEDLINE | ID: mdl-30802813

ABSTRACT

We propose a novel attention gate (AG) model for medical image analysis that automatically learns to focus on target structures of varying shapes and sizes. Models trained with AGs implicitly learn to suppress irrelevant regions in an input image while highlighting salient features useful for a specific task. This enables us to eliminate the necessity of using explicit external tissue/organ localisation modules when using convolutional neural networks (CNNs). AGs can be easily integrated into standard CNN models such as VGG or U-Net architectures with minimal computational overhead while increasing the model sensitivity and prediction accuracy. The proposed AG models are evaluated on a variety of tasks, including medical image classification and segmentation. For classification, we demonstrate the use case of AGs in scan plane detection for fetal ultrasound screening. We show that the proposed attention mechanism can provide efficient object localisation while improving the overall prediction performance by reducing false positives. For segmentation, the proposed architecture is evaluated on two large 3D CT abdominal datasets with manual annotations for multiple organs. Experimental results show that AG models consistently improve the prediction performance of the base architectures across different datasets and training sizes while preserving computational efficiency. Moreover, AGs guide the model activations to be focused around salient regions, which provides better insights into how model predictions are made. The source code for the proposed AG models is publicly available.
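The additive attention mechanism can be sketched for a 1-D feature vector with scalar weights. Real AGs apply 1×1 convolutions over multi-channel feature maps; the function and weight names here are illustrative assumptions, not the paper's implementation.

```python
import math

def attention_gate(x, g, w_x, w_g, w_psi):
    """Additive attention over a 1-D feature vector x, gated by signal g.

    For each feature, a compatibility score between the skip feature
    and the gating signal passes through ReLU then sigmoid, yielding an
    attention coefficient in (0, 1) that rescales the feature. Scalar
    weights stand in for the 1x1 convolutions of a real AG.
    """
    alphas = []
    for xi in x:
        q = max(0.0, w_x * xi + w_g * g)            # ReLU compatibility
        alpha = 1.0 / (1.0 + math.exp(-w_psi * q))  # sigmoid attention
        alphas.append(alpha)
    # Suppress irrelevant features, keep salient ones
    return [a * xi for a, xi in zip(alphas, x)], alphas

gated, alphas = attention_gate([1.0, 0.0, 2.0], g=1.0,
                               w_x=1.0, w_g=-1.0, w_psi=2.0)
```

Features that agree with the gating signal receive coefficients near 1, while incompatible ones are attenuated toward the sigmoid's 0.5 floor at zero compatibility (a real AG's extra linear layers let it push these toward 0).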


Subjects
Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Neural Networks, Computer , Radiography, Abdominal/methods , Tomography, X-Ray Computed/methods , Ultrasonography, Prenatal/methods , Algorithms , Datasets as Topic , Female , Humans , Pregnancy
10.
Med Image Anal ; 53: 156-164, 2019 04.
Article in English | MEDLINE | ID: mdl-30784956

ABSTRACT

Automatic detection of anatomical landmarks is an important step for a wide range of applications in medical image analysis. Manual annotation of landmarks is a tedious task and prone to observer errors. In this paper, we evaluate novel deep reinforcement learning (RL) strategies to train agents that can precisely and robustly localize target landmarks in medical scans. An artificial RL agent learns to identify the optimal path to the landmark by interacting with an environment, in our case 3D images. Furthermore, we investigate the use of fixed- and multi-scale search strategies with novel hierarchical action steps in a coarse-to-fine manner. Several deep Q-network (DQN) architectures are evaluated for detecting multiple landmarks using three different medical imaging datasets: fetal head ultrasound (US), adult brain and cardiac magnetic resonance imaging (MRI). The performance of our agents surpasses state-of-the-art supervised and RL methods. Our experiments also show that multi-scale search strategies perform significantly better than fixed-scale agents in images with a large field of view and noisy background, such as in cardiac MRI. Moreover, the novel hierarchical steps can significantly speed up the search process by a factor of 4-5.
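The coarse-to-fine idea behind the hierarchical action steps can be illustrated with a greedy 1-D search that halves its step size on convergence. This is only an analogy under simplifying assumptions: the trained DQN agent chooses moves from image observations, not from the distance to the target, which is unknown at test time.

```python
def multiscale_search(start, target, steps=(8, 4, 2, 1), max_moves=50):
    """Greedy 1-D stand-in for hierarchical coarse-to-fine search.

    Move with a large step while a move reduces the distance to the
    target, then refine with progressively smaller steps. Large early
    steps cover distance quickly; small final steps give precision.
    """
    pos = start
    for step in steps:
        for _ in range(max_moves):
            best = min((pos - step, pos + step),
                       key=lambda p: abs(p - target))
            if abs(best - target) >= abs(pos - target):
                break  # converged at this scale; switch to a finer step
            pos = best
    return pos

found = multiscale_search(0, 37)
```

The step schedule reaches the target in far fewer moves than a fixed unit step would need, which is the speed-up the hierarchical action steps exploit.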


Subjects
Anatomic Landmarks , Brain/diagnostic imaging , Deep Learning , Head/diagnostic imaging , Heart/diagnostic imaging , Imaging, Three-Dimensional/methods , Magnetic Resonance Imaging/methods , Adult , Female , Head/embryology , Humans , Pregnancy
11.
IEEE Trans Med Imaging ; 38(5): 1127-1138, 2019 05.
Article in English | MEDLINE | ID: mdl-30403623

ABSTRACT

The effectiveness of a cardiovascular magnetic resonance (CMR) scan depends on the ability of the operator to correctly tune the acquisition parameters to the subject being scanned and on the potential occurrence of imaging artifacts, such as cardiac and respiratory motion. In clinical practice, a quality control step is performed by visual assessment of the acquired images; however, this procedure is strongly operator-dependent, cumbersome, and sometimes incompatible with the time constraints in clinical settings and large-scale studies. We propose a fast, fully automated, and learning-based quality control pipeline for CMR images, specifically for short-axis image stacks. Our pipeline performs three important quality checks: 1) heart coverage estimation; 2) inter-slice motion detection; 3) image contrast estimation in the cardiac region. The pipeline uses a hybrid decision forest method - integrating both regression and structured classification models - to extract landmarks and probabilistic segmentation maps from both long- and short-axis images as a basis to perform the quality checks. The technique was tested on up to 3000 cases from the UK Biobank and on 100 cases from the UK Digital Heart Project, and validated against manual annotations and visual inspections performed by expert interpreters. The results show the capability of the proposed pipeline to correctly detect incomplete or corrupted scans (e.g., on UK Biobank, sensitivity and specificity, respectively, 88% and 99% for heart coverage estimation and 85% and 95% for motion detection), allowing their exclusion from the analyzed dataset or the triggering of a new acquisition.


Subjects
Cardiac Imaging Techniques/methods , Heart/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Machine Learning , Magnetic Resonance Imaging, Cine/methods , Algorithms , Cardiac Imaging Techniques/standards , Humans , Magnetic Resonance Imaging, Cine/standards , Movement/physiology , Quality Control
12.
J Cardiovasc Magn Reson ; 20(1): 65, 2018 09 14.
Article in English | MEDLINE | ID: mdl-30217194

ABSTRACT

BACKGROUND: Cardiovascular magnetic resonance (CMR) imaging is a standard imaging modality for assessing cardiovascular diseases (CVDs), the leading cause of death globally. CMR enables accurate quantification of the cardiac chamber volume, ejection fraction and myocardial mass, providing information for diagnosis and monitoring of CVDs. However, for years, clinicians have been relying on manual approaches for CMR image analysis, which is time-consuming and prone to subjective errors. It is a major clinical challenge to automatically derive quantitative and clinically relevant information from CMR images. METHODS: Deep neural networks have shown great potential in image pattern recognition and segmentation for a variety of tasks. Here we demonstrate an automated analysis method for CMR images, which is based on a fully convolutional network (FCN). The network is trained and evaluated on a large-scale dataset from the UK Biobank, consisting of 4,875 subjects with 93,500 pixelwise annotated images. The performance of the method has been evaluated using a number of technical metrics, including the Dice metric, mean contour distance and Hausdorff distance, as well as clinically relevant measures, including left ventricle (LV) end-diastolic volume (LVEDV) and end-systolic volume (LVESV), LV mass (LVM); right ventricle (RV) end-diastolic volume (RVEDV) and end-systolic volume (RVESV). RESULTS: By combining FCN with a large-scale annotated dataset, the proposed automated method achieves a high performance in segmenting the LV and RV on short-axis CMR images and the left atrium (LA) and right atrium (RA) on long-axis CMR images. On a short-axis image test set of 600 subjects, it achieves an average Dice metric of 0.94 for the LV cavity, 0.88 for the LV myocardium and 0.90 for the RV cavity. The mean absolute difference between automated measurement and manual measurement is 6.1 mL for LVEDV, 5.3 mL for LVESV, 6.9 g for LVM, 8.5 mL for RVEDV and 7.2 mL for RVESV.
On long-axis image test sets, the average Dice metric is 0.93 for the LA cavity (2-chamber view), 0.95 for the LA cavity (4-chamber view) and 0.96 for the RA cavity (4-chamber view). The performance is comparable to human inter-observer variability. CONCLUSIONS: We show that an automated method achieves a performance on par with human experts in analysing CMR images and deriving clinically relevant measures.
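The Dice metric reported above measures overlap between two binary masks; as a minimal sketch, a reference implementation over flat 0/1 lists (real pipelines operate on image arrays, so the list representation is an assumption for brevity):

```python
def dice(mask_a, mask_b):
    """Dice metric between two binary masks given as flat 0/1 lists.

    Defined as twice the intersection over the sum of the mask sizes;
    1.0 means perfect overlap, 0.0 means no overlap. Two empty masks
    are treated as a perfect match by convention.
    """
    inter = sum(a * b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0

# Two toy masks overlapping on 2 of their 3 and 2 foreground pixels
score = dice([1, 1, 1, 0, 0], [1, 1, 0, 0, 0])
```

A score of 0.94 for the LV cavity, as reported above, therefore indicates near-complete agreement between automated and manual contours.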


Subjects
Heart Diseases/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging, Cine/methods , Myocardial Contraction , Neural Networks, Computer , Stroke Volume , Ventricular Function, Left , Ventricular Function, Right , Aged , Automation , Databases, Factual , Deep Learning , Female , Heart Diseases/physiopathology , Humans , Male , Middle Aged , Observer Variation , Predictive Value of Tests , Reproducibility of Results
13.
Int J Comput Assist Radiol Surg ; 13(9): 1311-1320, 2018 Sep.
Article in English | MEDLINE | ID: mdl-29850978

ABSTRACT

PURPOSE: Deep convolutional neural networks (DCNN) are currently ubiquitous in medical imaging. While their versatility and high-quality results for common image analysis tasks, including segmentation, localisation and prediction, are astonishing, the large representational power comes at the cost of highly demanding computational effort. This limits their practical applications for image-guided interventions and diagnostic (point-of-care) support using mobile devices without graphics processing units (GPU). METHODS: We propose a new scheme that approximates both trainable weights and neural activations in deep networks by ternary values and tackles the open question of backpropagation when dealing with non-differentiable functions. Our solution enables the removal of the expensive floating-point matrix multiplications throughout any convolutional neural network and replaces them by energy- and time-preserving binary operators and population counts. RESULTS: We evaluate our approach for the segmentation of the pancreas in CT. Here, our ternary approximation within a fully convolutional network leads to more than 90% memory reductions and high accuracy (without any post-processing), with a Dice overlap of 71.0% that comes close to that obtained when using networks with high-precision weights and activations. We further provide a concept for sub-second inference without GPUs and demonstrate significant improvements in comparison with binary quantisation and without our proposed ternary hyperbolic tangent continuation. CONCLUSIONS: We present a key enabling technique for highly efficient DCNN inference without GPUs that will help to bring the advances of deep learning to practical clinical applications. It also holds great promise for improving accuracy in large-scale medical data retrieval.
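The core quantisation step can be sketched as follows. The dead-zone threshold `delta` is a hypothetical parameter; the paper's full scheme additionally quantises activations and handles backpropagation through the non-differentiable rounding.

```python
def ternarize(weights, delta=0.05):
    """Quantise floating-point weights to the ternary set {-1, 0, +1}.

    Weights with magnitude below the dead-zone threshold delta are
    zeroed (yielding sparsity); the rest keep only their sign. This is
    the step that lets matrix multiplications be replaced by cheap
    binary operations and population counts.
    """
    return [0 if abs(w) < delta else (1 if w > 0 else -1)
            for w in weights]

q = ternarize([0.3, -0.01, -0.7, 0.04])
```

Each ternary weight needs only two bits instead of 32, which is where memory reductions of the order reported above come from.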


Subjects
Algorithms , Imaging, Three-Dimensional/methods , Neural Networks, Computer , Pancreas/diagnostic imaging , Support Vector Machine , Tomography, X-Ray Computed/methods , Humans
14.
IEEE Trans Med Imaging ; 37(2): 384-395, 2018 02.
Article in English | MEDLINE | ID: mdl-28961105

ABSTRACT

Incorporation of prior knowledge about organ shape and location is key to improving the performance of image analysis approaches. In particular, priors can be useful in cases where images are corrupted and contain artefacts due to limitations in image acquisition. The highly constrained nature of anatomical objects can be well captured with learning-based techniques. However, in most recent and promising techniques, such as CNN-based segmentation, it is not obvious how to incorporate such prior knowledge. State-of-the-art methods operate as pixel-wise classifiers where the training objectives do not incorporate the structure and inter-dependencies of the output. To overcome this limitation, we propose a generic training strategy that incorporates anatomical prior knowledge into CNNs through a new regularisation model, which is trained end-to-end. The new framework encourages models to follow the global anatomical properties of the underlying anatomy (e.g. shape, label structure) via learnt non-linear representations of the shape. We show that the proposed approach can be easily adapted to different analysis tasks (e.g. image enhancement, segmentation) and improve the prediction accuracy of the state-of-the-art models. The applicability of our approach is shown on multi-modal cardiac data sets and public benchmarks. In addition, we demonstrate how the learnt deep models of 3-D shapes can be interpreted and used as biomarkers for classification of cardiac pathologies.


Subjects
Cardiac Imaging Techniques/methods , Imaging, Three-Dimensional/methods , Neural Networks, Computer , Algorithms , Cardiomyopathies/diagnostic imaging , Databases, Factual , Heart/diagnostic imaging , Humans , Magnetic Resonance Imaging
15.
IEEE Trans Med Imaging ; 36(1): 332-342, 2017 01.
Article in English | MEDLINE | ID: mdl-28055830

ABSTRACT

Accurate localization of anatomical landmarks is an important step in medical imaging, as it provides useful prior information for subsequent image analysis and acquisition methods. It is particularly useful for initialization of automatic image analysis tools (e.g. segmentation and registration) and detection of scan planes for automated image acquisition. Landmark localization has commonly been performed using learning-based approaches, such as classifier and/or regressor models. However, trained models may not generalize well in heterogeneous datasets when the images contain large differences due to size, pose and shape variations of organs. To learn more data-adaptive and patient-specific models, we propose a novel stratification-based training model, and demonstrate its use in a decision forest. The proposed approach does not require any additional training information compared to the standard model training procedure and can be easily integrated into any decision tree framework. The proposed method is evaluated on 1080 3D high-resolution and 90 multi-stack 2D cardiac cine MR images. The experiments show that the proposed method achieves state-of-the-art landmark localization accuracy and outperforms standard regression and classification based approaches. Additionally, the proposed method is used in a multi-atlas segmentation to create a fully automatic segmentation pipeline, and the results show that it achieves state-of-the-art segmentation accuracy.


Subjects
Heart/diagnostic imaging , Decision Trees , Humans , Reproducibility of Results
16.
IEEE Trans Med Imaging ; 36(2): 674-683, 2017 02.
Article in English | MEDLINE | ID: mdl-27845654

ABSTRACT

In this paper, we propose DeepCut, a method to obtain pixelwise object segmentations given an image dataset labelled with weak annotations, in our case bounding boxes. It extends the approach of the well-known GrabCut [1] method to include machine learning by training a neural network classifier from bounding box annotations. We formulate the problem as an energy minimisation problem over a densely-connected conditional random field and iteratively update the training targets to obtain pixelwise object segmentations. Additionally, we propose variants of the DeepCut method and compare them to a naïve approach to CNN training under weak supervision. We test its applicability to solve brain and lung segmentation problems on a challenging fetal magnetic resonance dataset and obtain encouraging results in terms of accuracy.
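The iterative target-update idea can be illustrated with a toy 1-D version that alternates between fitting a trivial two-mean "classifier" and relabelling pixels, never letting foreground leak outside the bounding box. Everything below is a hypothetical sketch: the real method trains a CNN and regularises with a densely-connected CRF rather than comparing intensities to class means.

```python
def deepcut_style_segmentation(intensities, in_box, iters=5):
    """Toy iterative target update under a bounding-box constraint.

    Start from the box as the foreground estimate, then repeatedly
    (1) fit foreground/background intensity means to the current
    labels and (2) relabel pixels inside the box by the nearer mean.
    Pixels outside the box are always background.
    """
    labels = list(in_box)  # initial targets: the whole box is foreground
    for _ in range(iters):
        fg = [v for v, l in zip(intensities, labels) if l]
        bg = [v for v, l in zip(intensities, labels) if not l]
        if not fg or not bg:
            break
        mu_fg = sum(fg) / len(fg)
        mu_bg = sum(bg) / len(bg)
        labels = [
            1 if b and abs(v - mu_fg) < abs(v - mu_bg) else 0
            for v, b in zip(intensities, in_box)
        ]
    return labels

# Bright object (0.8-0.9) inside the box; a bright distractor outside
img = [0.1, 0.9, 0.8, 0.2, 0.1, 0.95]
box = [0, 1, 1, 1, 0, 0]
seg = deepcut_style_segmentation(img, box)
```

The loop sharpens the initial box into a tight segmentation while the constraint keeps the bright distractor outside the box labelled as background.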


Subjects
Neural Networks, Computer, Algorithms, Brain, Humans, Image Enhancement, Image Interpretation, Computer-Assisted, Machine Learning, Magnetic Resonance Imaging, Monte Carlo Method
17.
Front Pediatr ; 4: 133, 2016.
Article in English | MEDLINE | ID: mdl-28018895

ABSTRACT

Ultrasound is commonly thought to underestimate ventricular volumes compared to magnetic resonance imaging (MRI), although the reason for this and the spatial distribution of the volume difference are not well understood. In this paper, we use landmark-based image registration to spatially align MRI and ultrasound images from patients with hypoplastic left heart syndrome and carry out a qualitative and quantitative spatial comparison of manual segmentations of the ventricular volume obtained from the respective modalities. In our experiments, we found a trend for volumes estimated from ultrasound to be smaller than those obtained from MRI (by up to approximately 20 ml), and that important contributors to this difference are artifacts such as shadows in the echo images and the differing criteria for including or excluding image features as part of the ventricular volume.
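The landmark-based alignment step can be illustrated with a standard least-squares rigid registration (the Kabsch/Procrustes solution via SVD). This is a generic sketch on synthetic paired points, not the paper's pipeline; the point sets and transform are invented for illustration.

```python
import numpy as np

# Paired landmarks picked in the two modalities (toy 3-D points).
rng = np.random.default_rng(2)
mri_pts = rng.uniform(0, 10, (6, 3))

# Simulate the ultrasound frame: a known rotation + translation.
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
t_true = np.array([5.0, -2.0, 1.0])
us_pts = mri_pts @ R_true.T + t_true

# Least-squares rigid alignment (Kabsch/Procrustes via SVD).
mu_m, mu_u = mri_pts.mean(0), us_pts.mean(0)
H = (us_pts - mu_u).T @ (mri_pts - mu_m)
U, _, Vt = np.linalg.svd(H)
D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # no reflections
R = U @ D @ Vt
t = mu_u - R @ mu_m

aligned = mri_pts @ R.T + t
err = np.abs(aligned - us_pts).max()
print("max alignment error:", err)
```

With noiseless correspondences the recovery is exact up to floating-point precision; with manually picked clinical landmarks the residual instead quantifies landmarking variability.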

18.
IEEE Trans Med Imaging ; 35(4): 967-77, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26625409

ABSTRACT

Real-time 3D echocardiography (RT3DE) has been proven to be an accurate tool for left ventricular (LV) volume assessment. However, identification of the LV endocardium remains a challenging task, mainly because of the low tissue/blood contrast of the images combined with typical artifacts. Several semi- and fully automatic algorithms have been proposed for segmenting the endocardium in RT3DE data in order to extract relevant clinical indices, but a systematic and fair comparison between such methods has so far been impossible due to the lack of a publicly available common database. Here, we introduce a standardized evaluation framework to reliably evaluate and compare the performance of algorithms developed to segment the LV border in RT3DE. A database of 45 multivendor cardiac ultrasound recordings, acquired at different centers with corresponding reference measurements from three experts, is made available. Algorithms from nine research groups were quantitatively evaluated and compared using the proposed online platform. The results show that the best methods produce promising results with respect to the experts' measurements for the extraction of clinical indices, and that they offer good segmentation precision in terms of mean distance error in the context of the experts' variability range. The platform remains open for new submissions.
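Evaluation metrics of the kind such a platform reports, an overlap score (Dice) and a mean boundary-distance error, can be sketched on toy 2-D masks. The exact metric definitions used by the challenge may differ; this is a generic illustration.

```python
import numpy as np

def boundary(mask):
    # Boundary pixels: inside the mask with at least one 4-neighbour
    # outside it.
    padded = np.pad(mask, 1)
    inner = (padded[:-2, 1:-1] & padded[2:, 1:-1]
             & padded[1:-1, :-2] & padded[1:-1, 2:])
    return np.argwhere(mask & ~inner)

def mean_surface_distance(a, b):
    # Symmetric mean of nearest-neighbour distances between the two
    # boundaries (in pixels).
    pa, pb = boundary(a), boundary(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=2)
    return 0.5 * (d.min(1).mean() + d.min(0).mean())

# Two circular "LV" masks whose radii differ by 2 pixels, mimicking
# an automatic and a manual contour.
yy, xx = np.mgrid[:64, :64]
r = np.hypot(yy - 32, xx - 32)
auto, manual = r < 20, r < 22

dice = 2 * (auto & manual).sum() / (auto.sum() + manual.sum())
msd = mean_surface_distance(auto, manual)
print(round(dice, 2), round(msd, 1))
```

Reporting a distance error alongside an overlap score matters clinically: two contours can overlap well while still disagreeing systematically at the endocardial border.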


Subjects
Algorithms, Echocardiography, Three-Dimensional/methods, Heart Ventricles/diagnostic imaging, Image Processing, Computer-Assisted/methods, Humans
19.
Article in English | MEDLINE | ID: mdl-24579117

ABSTRACT

Minimally invasive laparoscopic surgery is widely used for the treatment of cancer and other diseases. During the procedure, gas insufflation is used to create space for laparoscopic tools and operation, causing the organs and the abdominal wall to deform significantly. Because of this large deformation, the benefit of surgical plans, which are typically based on pre-operative images, is limited for real-time navigation. Some recent work introduces intra-operative images, such as cone-beam CT or interventional CT, to provide updated volumetric information after insufflation; other work has focused on simulating gas insufflation from the pre-operative images alone to estimate the deformation. This paper proposes a novel registration method for pre- and intra-operative 3D image fusion in laparoscopic surgery, in which the deformation of the pre-operative images is driven by a biomechanical model of the insufflation process. The proposed method was validated on five synthetic data sets generated from clinical images and on three pairs of in vivo CT scans acquired from two pigs before and after insufflation. The results show that the proposed method achieves high accuracy for both the synthetic and the real insufflation data.
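The idea of driving a registration with a deformation model can be caricatured in one dimension: a hypothetical "pressure" parameter of a toy bulge model is fitted so that the deformed pre-operative points best match the intra-operative observation. The bulge model and its parameter are invented for illustration and bear no relation to the paper's finite-element biomechanics.

```python
import numpy as np

# Pre-operative surface points of the abdominal wall (toy 2-D profile).
pre = np.stack([np.linspace(-5, 5, 21), np.zeros(21)], axis=1)

def insufflate(points, pressure):
    # Toy deformation model: the wall bulges outward with a smooth
    # profile whose amplitude scales with the insufflation pressure.
    out = points.copy()
    out[:, 1] += pressure * np.exp(-points[:, 0] ** 2 / 8.0)
    return out

# "Intra-operative" observation generated with an unknown pressure.
intra = insufflate(pre, 2.5)

# Model-driven registration: search for the model parameter that best
# explains the observed deformation.
pressures = np.linspace(0, 5, 101)
errors = [np.linalg.norm(insufflate(pre, p) - intra) for p in pressures]
best = float(pressures[int(np.argmin(errors))])
print(round(best, 2))
```

Constraining the registration to a physical model's low-dimensional parameter space is what makes this approach robust when intra-operative data are sparse or noisy.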


Subjects
Imaging, Three-Dimensional/methods, Laparoscopy/methods, Models, Biological, Pneumoradiography/methods, Subtraction Technique, Surgery, Computer-Assisted/methods, Tomography, X-Ray Computed/methods, Animals, Computer Simulation, Humans, Image Enhancement/methods, Multimodal Imaging/methods, Reproducibility of Results, Sensitivity and Specificity, Swine